51.
Today, construction planning and scheduling is almost always performed manually by experienced practitioners. The knowledge of those individuals is materialized, maintained, and propagated through master schedules and look-ahead plans. Although historical project schedules are available, manually mining their embedded knowledge to create generic work templates for future projects or to revise look-ahead schedules is difficult, time-consuming, and error-prone. The rigid work templates from prior research also do not scale to cover the inter- and intra-class variability in historical schedule activities. This paper aims to fulfill these needs via a new method that automatically learns construction knowledge from historical project planning and scheduling records and digitizes that knowledge in a flexible and generalizable data schema. Specifically, we present Dynamic Process Templates (DPTs) based on a novel vector representation for construction activities, in which sequencing knowledge is modeled with generative Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs). Our machine learning models are exhaustively tested and validated on a diverse dataset of 32 schedules obtained from real-world projects. The experimental results show that our method learns planning and sequencing knowledge with high accuracy across different projects. The benefits for automated project planning and scheduling, schedule quality control, and automated generation of project look-aheads are discussed in detail.
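The abstract describes a generative LSTM that learns activity sequencing from historical schedules. The following is a minimal sketch of that idea only, not the authors' DPT model: it assumes activities have already been mapped to integer ids, and the vocabulary size, dimensions, and placeholder data are illustrative.

```python
# Minimal sketch (not the paper's model): an LSTM that predicts the next
# construction activity in a schedule, given a sequence of activity ids.
import torch
import torch.nn as nn

class NextActivityLSTM(nn.Module):
    def __init__(self, vocab_size=500, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)        # activity id -> vector
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)           # scores over next activity

    def forward(self, activity_ids):                            # (batch, seq_len)
        h, _ = self.lstm(self.embed(activity_ids))
        return self.head(h)                                     # (batch, seq_len, vocab)

# One teacher-forced training step on a placeholder batch of "schedules".
model = NextActivityLSTM()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
batch = torch.randint(0, 500, (8, 20))                          # hypothetical encoded schedules
logits = model(batch[:, :-1])
loss = loss_fn(logits.reshape(-1, 500), batch[:, 1:].reshape(-1))
loss.backward()
optim.step()
```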
52.
In the era of digitalization, many emerging technologies, such as the Internet of Things (IoT), Digital Twin (DT), Cloud Computing, and Artificial Intelligence (AI), are developing quickly and being used in product design and development. Among these technologies, DT is a promising one that has been widely used in different industries, especially manufacturing, to monitor performance, optimize processes, simulate results, and predict potential errors. DT also plays various roles across the whole product lifecycle, from design and manufacturing to delivery, use, and end-of-life. With the growing demand for individualized products and the implementation of Industry 4.0, DT can provide an effective solution for future product design, development, and innovation. This paper surveys the current state of DT research on product design and development by summarizing typical industrial cases. Challenges and potential applications of DT in product design and development are also discussed to inspire future studies.
53.
This study aims to measure the logistics economic efficiency of major first-tier cities in China and propose an interactive development plan for the green logistics industry based on the division of urban agglomerations. First, an evaluation system for the logistics input and economic output of urban agglomerations is established based on data-driven analytics, and a green logistics economic efficiency model using the dataset from 2008 to 2017 is constructed to comprehensively estimate the input–output efficiency, development trend, and spatial differentiation of urban agglomerations. Finally, the Shapley value method is adopted to obtain a specific distribution plan for logistics investment. The results show that the logistics economic efficiency of all 9 major urban agglomerations is greater than 1 under the constant returns-to-scale (CRS) hypothesis, while the average logistics economic efficiency of the Pearl River Delta, Chengdu-Chongqing, and Shandong Peninsula regions is significantly less than 1 under the medium returns-to-scale (MRS) hypothesis. The comprehensive input–output efficiency of five of the 9 major Chinese urban agglomerations showed a downward trend, with the highest declining rate of 5.9% in the Yangtze River Delta urban agglomeration. The urban agglomeration with the highest increase in input–output efficiency from 2013 to 2017 was the Chengdu-Chongqing region, which reached 3.97%.
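The investment-allocation step relies on the Shapley value. The sketch below shows only the standard Shapley calculation; the three coalitions and the characteristic function values are toy stand-ins, not the paper's logistics-output model.

```python
# Illustrative Shapley-value allocation over a toy cooperative game of
# hypothetical urban-agglomeration coalitions (numbers are made up).
from itertools import combinations
from math import factorial

players = ["YangtzeDelta", "PearlDelta", "ChengduChongqing"]

def v(coalition):
    """Value created by a coalition (toy characteristic function)."""
    values = {
        frozenset(): 0,
        frozenset({"YangtzeDelta"}): 10,
        frozenset({"PearlDelta"}): 8,
        frozenset({"ChengduChongqing"}): 6,
        frozenset({"YangtzeDelta", "PearlDelta"}): 22,
        frozenset({"YangtzeDelta", "ChengduChongqing"}): 19,
        frozenset({"PearlDelta", "ChengduChongqing"}): 16,
        frozenset(players): 30,
    }
    return values[frozenset(coalition)]

def shapley(player):
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for r in range(len(others) + 1):
        for coalition in combinations(others, r):
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += weight * (v(set(coalition) | {player}) - v(coalition))
    return total

# Shares of the joint logistics investment attributed to each region.
print({p: round(shapley(p), 2) for p in players})
```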
54.
In general, preliminary or primary cost estimates are used to select contractors from among bidders in Japan. The primary cost estimate must be accurate; otherwise, the contractor selected through the bidding process will lose profit. General contractors rarely have engineers skilled enough to produce accurate primary cost estimates, and conventional primary estimates have a high error range and low reliability. An automated system that converts detailed estimates into primary estimates is therefore in high demand. This paper presents a prototype AI converter that can accurately and automatically convert detailed cost estimates into primary estimates. Converting detailed cost estimates into primary estimates is essentially a regression problem, and this paper proposes a feature-elimination-based data augmentation method for regression problems. Empirical experiments show that the proposed data augmentation method is quite effective with an Extra-Trees ensemble method. The proposed method was evaluated on the Colorado Department of Transportation (CDOT) dataset for predicting construction costs with the Extra-Trees algorithm and the random forest algorithm, respectively. The CDOT dataset is one of the largest publicly available datasets for construction cost quotation and estimation of roads, bridges, and buildings.
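The abstract does not spell out the augmentation rule, so the sketch below shows one plausible reading of "feature-elimination based data augmentation": duplicate training rows with the least-important features masked before fitting an Extra-Trees regressor. The masking rule, dataset, and hyperparameters are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch: augment a regression training set by copying rows with
# low-importance features zeroed out, then fit an Extra-Trees regressor.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=12, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Rank features with a preliminary model, then "eliminate" the weakest ones
# in augmented copies of the training data.
probe = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
weakest = np.argsort(probe.feature_importances_)[:4]
X_aug = X_tr.copy()
X_aug[:, weakest] = 0.0
X_full = np.vstack([X_tr, X_aug])
y_full = np.concatenate([y_tr, y_tr])

model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X_full, y_full)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```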
55.
Nanoparticles usually exhibit a specific structure and composition, which can influence the development of the microstructure during sintering. Barium hexaferrite nanoplatelets have a specific, iron-rich structure defined by surfaces terminated with the S blocks of their SRS*R* hexaferrite structure (S and R represent a cubic (Fe6O8)2+ and a hexagonal (BaFe6O11)2− structural block, respectively). Unsubstituted and Sc-substituted hexaferrite nanoplatelets were hydrothermally synthesized and fired at different temperatures. A combination of morpho-structural analyses (XRD, SEM, TEM, and aberration-corrected STEM) and magnetic measurements was used to reveal the evolution of the microstructure during sintering. During the initial stages of sintering, the nanoplatelets thicken predominantly by the fusion of individual original nanoplatelets. Because of the Fe-rich surfaces of the nanoplatelets, this fusion growth produces an inhomogeneity that leads to the formation of planar defects in the grains and the precipitation of Fe2O3 as a secondary phase. In the Sc-substituted hexaferrite grains, superstructural compositional ordering was detected for the first time. The Sc substitution caused exaggerated grain growth in barium hexaferrite ceramics sintered at 1300 °C.
56.
Measuring emotions is a real challenge for fundamental and applied research, especially in ecological contexts. de Wijk and Noldus propose combining two types of measures: explicit measures to characterize a specific food, and implicit (physiological) measures to capture the whole experience of a meal in real-life situations. This raises several challenges, including the development of new, miniaturized sensors and devices, as well as new ways of analyzing the data. We suggest a path for future studies regarding data analysis: bringing Data Science into the game. This field of research may enable the development of predictive, but also explanatory, models that link the subjective experience of emotions to physiological responses in real-life contexts. We suggest that food scientists should step out of their comfort zone by collaborating with computer scientists and training with the new tools of Data Science, which will undoubtedly enable them (1) to better manage complex and heterogeneous data sets, and (2) to extract knowledge that will be essential to this field of research.
57.
朱高建 《中国矿业》2021,30(S1):182-183
To address problems such as the low reliability and high safety risk of manually counting mine-car dumping operations at the mining plant, a design scheme for an automatic mine-car dumping counting system is proposed. The system is built on the JRF33 railway car-number identification device, uses RS485 as the communication protocol, and performs statistical analysis of the collected data. The results show that the counting system runs well and that dumping data are collected promptly, accurately, and completely. The system provides useful guidance for the statistics and analysis of ore transportation data and for production management.
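To illustrate the counting idea only, the sketch below reads frames from an RS485 link (through a USB-RS485 adapter with pyserial) and tallies dump events per car. The port name, serial settings, and ASCII frame format are assumptions; the JRF33 device's actual protocol is not described in the abstract.

```python
# Hedged sketch: tally mine-car dump events from car-number frames arriving
# over an RS485 serial link (frame format and settings are hypothetical).
from collections import Counter
import serial  # pyserial

counts = Counter()
with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as port:
    for _ in range(100):                      # poll a fixed number of frames for the demo
        frame = port.readline().decode("ascii", errors="ignore").strip()
        if frame:                             # assume one car-number string per dump event
            counts[frame] += 1

print("dump events per car:", dict(counts))
```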
58.
The development of data-driven artificial intelligence technology has given birth to a variety of big data applications, and data has become an essential factor in improving them. Federated learning, a privacy-preserving machine learning method, has been proposed to leverage data from different data owners. It is typically used in conjunction with cryptographic methods, in which data owners train the global model by sharing encrypted model updates. However, data encryption makes it difficult to judge the quality of these model updates, and malicious data owners may launch attacks such as data poisoning and free-riding. Defending against such attacks requires an approach to audit encrypted model updates. In this paper, we propose a blockchain-based audit approach for encrypted gradients. It uses a behavior chain to record the encrypted gradients from data owners and an audit chain to evaluate the gradients' quality. Specifically, we propose a privacy-preserving homomorphic noise mechanism in which the noise added to the individual gradients sums to zero after aggregation, ensuring the availability of the aggregated gradient. In addition, we design a joint audit algorithm that can locate malicious data owners without decrypting individual gradients. Through security analysis and experimental evaluation, we demonstrate that our approach can defend against malicious gradient attacks in federated learning.
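The core of the noise mechanism is that per-owner noise shares cancel in the aggregate. The numeric sketch below demonstrates only that cancellation property, not the paper's blockchain or homomorphic-encryption protocol; the owner count and gradient dimension are arbitrary.

```python
# Minimal sketch of zero-sum noise masking: each owner publishes a masked
# gradient, yet the sum of the masked gradients equals the true aggregate.
import numpy as np

rng = np.random.default_rng(0)
n_owners, dim = 4, 6
gradients = rng.normal(size=(n_owners, dim))   # each owner's true gradient

noise = rng.normal(size=(n_owners, dim))
noise -= noise.mean(axis=0)                    # force the noise shares to sum to zero
masked = gradients + noise                     # what each owner would share

assert np.allclose(masked.sum(axis=0), gradients.sum(axis=0))
print("aggregated gradient recovered:", masked.sum(axis=0))
```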
59.
A new method is developed in this paper to estimate the maximum available capacity, an important basis for indicating the State of Health (SOH) of lithium-ion batteries. First, a data reconstruction approach is proposed to pre-process the acquired data, suppressing the influence of measurement noise and reducing the negative impact on estimation precision when measuring equipment adopts different sampling frequencies. Then, the variation with battery aging of the incremental capacity curve obtained from the reconstructed data is analyzed, and a health indicator (HI) comprising multi-view features is proposed to characterize battery degradation more comprehensively. The multi-view features come from the capacity increment curves versus voltage and time, and include the maximum value of the capacity increment curve, the voltage corresponding to that maximum, and values surrounding the maximum. Finally, Support Vector Regression is used to establish a model between the extracted HI and the maximum available capacity, and two open-source datasets are used to verify the performance. The experimental results show that the data reconstruction method and the multi-view health indicator proposed in this paper yield high-precision estimation results.
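The final modeling step maps HI features to capacity with Support Vector Regression. The sketch below shows that step in isolation; the synthetic features and the toy degradation relation stand in for the real incremental-capacity features and measured capacities, and the SVR hyperparameters are illustrative.

```python
# Hedged sketch: regress maximum available capacity on multi-view health-indicator
# features with SVR (synthetic data in place of the real battery datasets).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
hi = rng.uniform(size=(200, 5))                                # e.g. IC-peak height, peak voltage, ...
capacity = 2.0 - 0.8 * hi[:, 0] + 0.1 * rng.normal(size=200)  # toy degradation relation

X_tr, X_te, y_tr, y_te = train_test_split(hi, capacity, random_state=1)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_tr, y_tr)
print("capacity estimation R^2:", round(model.score(X_te, y_te), 3))
```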
60.
The rapid evolution of technology has led to the generation of high-dimensional data streams in a wide range of fields, such as genomics, signal processing, and finance. The combination of the streaming scenario and high dimensionality is particularly challenging for the outlier detection task. This is due to the special characteristics of data streams, such as concept drift and limited time and space budgets, in addition to the impact of the well-known curse of dimensionality in high-dimensional space. To the best of our knowledge, few studies have addressed these challenges simultaneously, and detecting anomalies in this context therefore requires a great deal of attention. The main objective of this work is to study the main approaches existing in the literature and to identify a set of comparison criteria, such as computational cost and the interpretability of outliers, which help reveal the different challenges and additional research directions associated with this problem. We conclude with a summary of the main limitations identified and detail the research directions related to this issue in order to promote research in this community.